
{"en":"AI Copyright Law May Get First Major Overhaul: The Necessity Test","nb":"AI Copyright Law May Get First Major Overhaul: The Necessity Test"}

Håkon Berntsen
{"en":"**A new legal framework could reshape how we think about AI-generated content and copyright infringement.**\n\nResearchers have proposed a radically different criterion for determining when generative AI violates copyright: **whether the output could have been created without the copyrighted work in the training data.**\n\n## The Current Problem\n\nCopyright law focuses on \"substantial similarity\"—does the new work look\/sound too much like the original?\n\nBut generative AI breaks this model. An AI can closely imitate an artist's *style* without copying any specific *content*. It can write like Hemingway, paint like Van Gogh, or compose like Mozart—all without reproducing any particular work.\n\nCurrent legal tests don't handle this well, which is why we're seeing major lawsuits from artists, writers, and music publishers against AI companies.\n\n## The Proposed Solution: The Necessity Test\n\n**New criterion:** A generative AI output infringes on an existing work if it **could not have been generated without that work in its training corpus.**\n\nIn other words: If removing a specific artist's work from the training data would make the AI unable to produce a similar output, that's infringement.\n\n## Why This Matters\n\n**For AI Companies:**\n- May require training data audits and provenance tracking\n- Could force transparency about what data went into models\n- Might make some training practices legally risky\n\n**For Artists & Creators:**\n- Provides recourse against style imitation, not just content copying\n- Shifts burden of proof toward demonstrating necessity\n- Protects creative expression, not just individual works\n\n**For Users:**\n- Generated content may need disclaimers about training sources\n- Some AI tools might become limited in what styles they can produce\n- Clearer rules about what's legal to generate and share\n\n## The Technical Challenge\n\nHow do you *prove* a specific training example was necessary?\n\nThe researchers propose modeling generative AI as learning a distribution over possible outputs. If a particular output is unlikely under the model trained without Artist X, but likely when Artist X is included, that suggests necessity.\n\nBut implementation will be complex—and contentious.\n\n## Timeline\n\nThis is a research proposal, not (yet) law. But with ongoing litigation involving Stability AI, Midjourney, OpenAI, and others, expect courts to grapple with these questions throughout 2026.\n\nThe next major copyright decision could cite this framework.\n\n## Source\n- \"Creative Ownership in the Age of AI\" (ArXiv 2602.12270)\\n\\n
## Why This Matters

**For AI Companies:**
- May require training data audits and provenance tracking
- Could force transparency about what data went into models
- Might make some training practices legally risky

**For Artists & Creators:**
- Provides recourse against style imitation, not just content copying
- Shifts the burden of proof toward demonstrating necessity
- Protects creative expression, not just individual works

**For Users:**
- Generated content may need disclaimers about training sources
- Some AI tools might become limited in what styles they can produce
- Clearer rules about what's legal to generate and share

## The Technical Challenge

How do you *prove* a specific training example was necessary?

The researchers propose modeling generative AI as learning a distribution over possible outputs. If a particular output is unlikely under a model trained without Artist X, but likely when Artist X is included, that suggests necessity (a sketch of this comparison follows below).

But implementation will be complex, and contentious.
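To make the comparison concrete, here is a minimal sketch in Python. It assumes you already have log-likelihoods for the same output under two models, one trained with Artist X's works and one retrained without them; every name, number, and threshold here is hypothetical, since the paper proposes the criterion, not an implementation.

```python
# Hypothetical necessity check via a log-likelihood ratio.
# logp_full:    log p(y | model trained on the full corpus D)
# logp_ablated: log p(y | model retrained on D without Artist X's works)

def necessity_score(logp_full: float, logp_ablated: float) -> float:
    """How many nats more probable the output is when Artist X's works
    are included in training. Large values suggest the output could not
    have been generated without them."""
    return logp_full - logp_ablated

# Illustrative numbers: y is plausible under the full model but
# vanishingly unlikely under the ablated one.
logp_full = -120.0
logp_ablated = -450.0

THRESHOLD = 100.0  # where to set this cutoff is exactly the contentious part

score = necessity_score(logp_full, logp_ablated)
if score > THRESHOLD:
    print(f"score={score:.1f}: output plausibly required Artist X's works")
else:
    print(f"score={score:.1f}: no evidence of necessity")
```

Note where the hard work hides: obtaining `logp_ablated` means retraining (or credibly approximating a retrain of) the model with one artist's works held out, which is a large part of why implementation would be complex.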

## Timeline

This is a research proposal, not (yet) law. But with ongoing litigation involving Stability AI, Midjourney, OpenAI, and others, expect courts to grapple with these questions throughout 2026.

The next major copyright decision could cite this framework.

## Source

- "Creative Ownership in the Age of AI" (arXiv:2602.12270)



**About OpenInfo.no:** We run DAVN.ai
